Online Learning in a Chemical Perceptron
Authors
Abstract
Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized. Learning promotes reusability and reduces the system design to a simple input-output specification. In this article we introduce a chemical perceptron, the first full-featured implementation of a perceptron in an artificial (simulated) chemistry. A perceptron is the simplest system capable of learning, inspired by the functioning of a biological neuron. Our artificial chemistry is deterministic and discrete-time, and follows Michaelis-Menten kinetics. We present two models, the weight-loop perceptron and the weight-race perceptron, which represent two possible strategies for a chemical implementation of linear integration and threshold. Both chemical perceptrons can successfully identify all 14 linearly separable two-input logic functions and maintain high robustness against rate-constant perturbations. We suggest that DNA strand displacement could, in principle, provide an implementation substrate for our model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing.
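For readers unfamiliar with the underlying model, the following is a minimal sketch of classic online perceptron learning applied to one of the two-input logic functions mentioned in the abstract (NAND). It is not the paper's chemical implementation; the learning rate, initial weights, and epoch count are illustrative assumptions.

```python
# Minimal sketch (not the chemical perceptron itself): a classic online
# perceptron with a threshold activation, trained on a linearly separable
# two-input logic function (here, NAND).

def train_perceptron(samples, epochs=20, lr=0.1):
    """Online perceptron learning: weights are updated after every sample."""
    w0, w1, b = 0.0, 0.0, 0.0                       # two input weights and a bias
    for _ in range(epochs):
        for (x0, x1), target in samples:
            y = 1 if w0 * x0 + w1 * x1 + b > 0 else 0   # threshold (step) output
            error = target - y                           # +1, 0, or -1
            w0 += lr * error * x0
            w1 += lr * error * x1
            b  += lr * error
    return w0, w1, b

# NAND truth table as (input pair, desired output)
nand = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w0, w1, b = train_perceptron(nand)
print([(x, 1 if w0 * x[0] + w1 * x[1] + b > 0 else 0) for x, _ in nand])
```

The same loop learns any of the 14 linearly separable two-input functions; only the non-separable XOR and XNOR are out of reach for a single perceptron.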
Similar Articles
The Forgetron: A Kernel-Based Perceptron on a Fixed Budget
The Perceptron algorithm, despite its simplicity, often performs well on online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis, which may grow unboundedly. In this paper we presen...
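For illustration, the sketch below shows a simplified budgeted kernel perceptron that caps memory by discarding the oldest stored example once a fixed budget is exceeded. The actual Forgetron additionally shrinks the remaining coefficients before removal, a step omitted here; the kernel choice and budget size are assumptions.

```python
# Illustrative sketch of a budget-limited kernel perceptron (simplified,
# not the exact Forgetron update rule).
import math
from collections import deque

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two feature tuples."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def budgeted_kernel_perceptron(stream, budget=50):
    """Online mistake-driven updates; memory never exceeds `budget` examples."""
    support = deque()                      # stored (example, label) pairs
    mistakes = 0
    for x, y in stream:                    # y is +1 or -1
        score = sum(yi * rbf(xi, x) for xi, yi in support)
        if y * score <= 0:                 # mistake: add x to the support set
            mistakes += 1
            support.append((x, y))
            if len(support) > budget:      # enforce the fixed memory budget
                support.popleft()          # drop the oldest support vector
    return support, mistakes
```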
Bayesian online learning in
In a Bayesian approach to online learning a simple approximate parametric form for posterior is updated in each online learning step. Usually in online learning only an estimate of the solution is updated. The Bayesian online approach is applied to two simple learning scenarios, learning a perceptron rule with respectively a spherical and a binary weight prior. In the rst case we rederive the r...
Perceptron-like Algorithms and Generalization Bounds for Learning to Rank
Learning to rank is a supervised learning problem where the output space is the space of rankings but the supervision space is the space of relevance scores. We make theoretical contributions to the learning to rank problem both in the online and batch settings. First, we propose a perceptron-like algorithm for learning a ranking function in an online setting. Our algorithm is an extension of t...
Perceptron like Algorithms for Online Learning to Rank
Perceptron is a classic online algorithm for learning a classification function. In this paper, we provide a novel extension of the perceptron algorithm to the learning to rank problem in information retrieval. We consider popular listwise performance measures such as Normalized Discounted Cumulative Gain (NDCG) and Average Precision (AP). A modern perspective on perceptron for classification i...
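For reference, the following is a small sketch of one common formulation of NDCG, one of the listwise measures mentioned above; the gain and discount functions shown are a standard choice, not necessarily the exact definition used in that paper.

```python
# Illustrative sketch: Normalized Discounted Cumulative Gain (NDCG).
# `relevances` are graded relevance scores in the order produced by a ranker.
import math

def dcg(relevances):
    """Discounted cumulative gain: gains are discounted by log2 of the rank."""
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1, 2]))   # ~0.95 for this example ordering
```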
Optimal Bayesian Online Learning
In a Bayesian approach to online learning a simple paramet-ric approximate posterior over rules is updated in each online learning step. Predictions on new data are derived from averages over this posterior. This should be compared to the Bayes optimal batch (or ooine) approach for which the posterior is calculated from the prior and the likelihood of the whole training set. We suggest that min...
Journal: Artificial Life
Volume 19, Issue 2
Pages: -
Published: 2013